    A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law

    Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the New York Times to Reddit boards. And so, it comes as no surprise that researchers have already documented instances of bias in which GPT-3 spews toxic language. But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access-to-justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs. As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that this guidance is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about the pros and cons of using AI to ensure the ethical use of this emerging technology.

    “A Change is Gonna Come:” Developing a Liability Framework for Social Media Algorithmic Amplification

    From the moment social media companies like Facebook were created, they have been largely immune to suit for the actions they take with respect to user content. This is thanks to Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which offers broad immunity to sites for content posted by users. But seemingly the only thing a deeply divided legislature can agree on is that Section 230 must be amended, and soon. Once that immunity is altered, either by Congress or the courts, these companies may be liable for the decisions and actions of their algorithmic recommendation systems, artificial intelligence models that sometimes amplify the worst in our society, as Facebook whistleblower Frances Haugen explained to Congress in her testimony. But what, exactly, will it look like to sue a company for the actions of an algorithm? Whether through torts like defamation or under certain statutes, such as those aimed at curbing terrorism, the mechanics of bringing such a claim will surely occupy academics and practitioners in the wake of changes to Section 230. To that end, this Article is the first to examine how the issue of algorithmic amplification might be addressed by agency principles of direct and vicarious liability, specifically within the context of holding social media companies accountable. Accordingly, this Article covers the basics of algorithmic recommendation systems, discussing them in layman’s terms and explaining why Section 230 reform may spur claims that have a profound impact on traditional tort law. The Article looks to sex trafficking claims made against social media companies—an area already exempted from Section 230’s shield—as an early model of how courts might address other claims against these companies. It also examines the potential hurdles, such as causation, that will remain even when Section 230 is amended. It concludes by offering certain policy considerations for both lawmakers and jurists.

    Addressing Racial Disparities in Preschool Suspension and Expulsion Rates

    In 2014, the Department of Education’s Office for Civil Rights published, for the first time, data tracking preschool suspension and expulsion rates. The data was startling: not only were preschoolers being suspended and expelled, something that surprised many readers on its own, but they were being suspended and expelled in racially disproportionate numbers, with African-American boys bearing the brunt of the discipline. Politicians, researchers, and advocates quickly spoke out, noting that these numbers confirmed that the school-to-prison pipeline really starts in preschool, and calling for reform. In this Article, I explore some of the policies and practices that have led to preschool expulsions, including zero-tolerance policies and the challenging behavior of preschoolers, and also offer theories on what might have led to their racially disproportionate use, including unconscious bias on the part of teachers and administrators. I also examine the tragic impact these disciplinary procedures can have on students and their families. I next examine the long odds for success that most legal challenges to racially disproportionate preschool expulsions and suspensions will face, due mostly to judicially imposed requirements that plaintiffs establish racially discriminatory intent, not just disparate outcomes. Finally, I sketch the contours of what a successful policy-based solution might look like, and how best practices from existing research and programs might be utilized to create meaningful change.

    Prisoners of Fate: The Challenges of Creating Change for Children of Incarcerated Parents

    Children of incarcerated parents, the invisible victims of mass incarceration, suffer tremendous physical, psychological, educational, and financial burdens—detrimental consequences that can continue even long after a parent has been released. Although these children are blameless, policy makers, judges, and prison officials in charge of visitation policies have largely overlooked them. The United States Sentencing Commission Guidelines Manual explicitly instructs judges to ignore children when fashioning their parents’ sentences, and judges have largely hewed to this policy, even in the wake of the 2005 United States v. Booker decision that made those Guidelines merely advisory, not mandatory. Although some scholars have suggested amending the Guidelines or making other legislative changes that would bring children’s interests forward at the sentencing phase, these suggestions are less likely than ever to bear fruit. In light of the Trump Administration’s “tough on crime” rhetoric, new Attorney General Jefferson Sessions’ “law and order” reputation, and Republican control of the House and Senate, policy change that is viewed as “progressive” is highly unlikely. Therefore, this Article proposes two other avenues for change. First, in a new and unique proposal, this Article suggests that federal judges can and should independently order the inclusion of Family Impact Statements in a defendant’s presentence investigation report via a heretofore largely unused “catchall provision” of the Federal Rules of Criminal Procedure. Second, this Article makes three modest policy recommendations aimed at improving the ability of children to visit their incarcerated parents. Studies have shown that visitation is a powerful tool for mitigating many of the harms children experience when their parents are incarcerated, but visitation rates are woefully low. The options for improving circumstances for children of incarcerated parents may well be limited, but there are viable options, and there is no time to waste.

    Closing

    Reprogramming Recidivism: The First Step Act and Algorithmic Prediction of Risk
